30 research outputs found

    A new empirical challenge for local theories of consciousness

    Local theories of consciousness state that one is conscious of a feature if it is adequately represented and processed in sensory brain areas, given some background conditions. We challenge the core prediction of local theories based on recently discovered long-lasting postdictive effects demonstrating that features can be represented for hundreds of milliseconds in perceptual areas without being consciously perceived. Unlike with previous empirical data aimed against local theories, proponents of local theories cannot explain these effects away by conjecturing that subjects are phenomenally conscious of features that they cannot report: only a strong and counterintuitive version of this claim can account for long-lasting postdictive effects. Although this move is possible, we argue that adopting this strong version of the "overflow hypothesis" would nullify the weight of the evidence taken to support local theories of consciousness in the first place. We also discuss several alternative explanations that proponents of local theories could offer.

    Building perception block by block: a response to Fekete et al

    Is consciousness a continuous stream, or do percepts occur only at certain moments in time? This age-old question is still under debate. Both positions face difficult problems, which we proposed to overcome with a 2-stage model in which unconscious processing continuously integrates information before a discrete, conscious percept occurs. Recently, Fekete et al. criticized our model. Here, we show that, contrary to their proposal, simple sliding windows cannot explain apparent motion and related phenomena within a continuous framework, and that their supervenience argument holds only for qualia realists, a philosophical position we do not adopt.

    Capsule networks as recurrent models of grouping and segmentation

    Funding: AD was supported by the Swiss National Science Foundation grant No. 176153 "Basics of visual processing: from elements to figures". The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Data Availability: The human data for experiment 2 and the full code to reproduce all our results are available at https://github.com/adriendoerig/Capsule-networks-as-recurrent-models-of-grouping-and-segmentation

    Global and high-level effects in crowding cannot be predicted by either high-dimensional pooling or target cueing

    Acknowledgments: The authors thank Ruth Rosenholtz for her detailed comments on this manuscript and for sharing the code of the TTM. We thank both reviewers for their insightful comments. A.B. was supported by the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 785907 (Human Brain Project SGA2) and No. 945539 (Human Brain Project SGA3). O.H.C. was supported by the Swiss National Science Foundation (SNF) grant 320030_176153 "Basics of visual processing: from elements to figures." A.D. was supported by the Swiss National Science Foundation grants No. 176153 "Basics of visual processing: from elements to figures" and No. 191718 "Towards machines that see like us: human eye movements for robust deep recurrent neural networks." D.W. was supported by the National Institutes of Health grant R01 CA236793.

    Crowding and the Architecture of the Visual System

    Classically, vision is seen as a cascade of local, feedforward computations. This framework has been tremendously successful, inspiring a wide range of ground-breaking findings in neuroscience and computer vision. Recently, feedforward Convolutional Neural Networks (ffCNNs), a kind of deep neural network inspired by this classic framework, have revolutionized computer vision and been adopted as tools in neuroscience. However, despite these successes, there is much more to vision. First, there are flagrant architectural differences between biological systems and the classic framework. For example, recurrence is abundant in the brain but absent from the classic framework and ffCNNs. Although there is widespread agreement about the importance of these recurrent connections, their computational role is still poorly understood. Second, these architectural differences lead to behavioural differences too, highlighted by psychophysical evidence. Relatedly, ffCNNs are extremely vulnerable to small changes to their inputs and do not generalize well beyond the dataset used to train them. Human vision, in contrast, is much more robust. New insights are needed to face up to these challenges. In this thesis, I use visual crowding and related psychophysical effects as probes into visual processes that go beyond the classic framework. In crowding, perception of a target deteriorates in clutter. I focus on global aspects of crowding, in which perception of a small target is strongly modulated by the global configuration of elements across the visual field. I show that models based on the classic framework, including ffCNNs, cannot explain these effects for principled reasons and identify recurrent grouping and segmentation as a key missing ingredient. Then, I show that capsule networks, a recent kind of deep learning architecture combining the power of ffCNNs with recurrent grouping and segmentation, naturally explain these effects. 
I provide psychophysical evidence that humans indeed use a similar recurrent grouping and segmentation strategy in global crowding effects. In crowding, visual elements interfere across space. To study how elements interfere over time, I use the Sequential Metacontrast psychophysical paradigm, in which perception of visual elements depends on elements presented hundreds of milliseconds later. I psychophysically characterize the temporal structure of this interference and propose a simple computational model. My results support the idea that perception is a discrete process. I lay out theoretical implications of these findings. Together, the results presented here provide stepping-stones towards a fuller understanding of the visual system by suggesting architectural changes needed for more human-like neural computations.

    Quasi-continuous unconscious processing precedes discrete conscious perception

    Perception seems to be a continuous stream and, for this reason, we often implicitly assume that it is continuous. Indeed, many models of vision rely, explicitly or implicitly, on continuous perception. However, continuous perception is challenged by phenomena such as apparent motion, in which we perceive not one static disk and then another but a smooth trajectory, favoring discrete accounts. What, then, is the sampling rate of discrete perception? Usually, the sampling rate is inferred from temporal resolution: if we cannot distinguish two flashes of light presented 40 ms apart, discrete sampling cannot be faster than 40 ms. However, different paradigms have produced evidence for sampling rates ranging from 3 ms to 300 ms, challenging discrete models. To overcome these challenges, we first propose a two-step model in which a quasi-continuous unconscious processing stage with high temporal resolution precedes discrete conscious perception, which occurs at a much lower rate, in the range of 400 ms. Second, we provide evidence for this model from a set of TMS and visual masking experiments. Finally, we compare a series of mathematical models with each other and show that one-stage models, whether continuous or discrete, cannot explain the experimental results, further favoring two-stage models of perception.
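    The two-step model can be caricatured in a few lines of code. This is a minimal illustrative sketch only: the 1 ms step, the leaky accumulator, and the 400 ms readout interval are assumptions for exposition, not the authors' fitted model.

    ```python
    # Toy sketch of a 2-stage model: quasi-continuous unconscious
    # integration (stage 1) followed by discrete conscious readout
    # (stage 2). All parameters are illustrative assumptions.

    def simulate(stimulus, leak=0.995, readout_every_ms=400):
        """stimulus: list of input values, one per millisecond."""
        evidence = 0.0
        percepts = []  # (time_ms, integrated evidence) at conscious moments
        for t, s in enumerate(stimulus):
            evidence = leak * evidence + s          # stage 1: continuous integration
            if (t + 1) % readout_every_ms == 0:     # stage 2: discrete percept
                percepts.append((t + 1, evidence))
        return percepts

    # A brief 20 ms flash at t = 100 ms only reaches a conscious percept
    # at the next discrete readout (t = 400 ms), hundreds of ms later.
    stimulus = [1.0 if 100 <= t < 120 else 0.0 for t in range(800)]
    print(simulate(stimulus))
    ```

    The point of the sketch is structural: the unconscious stage tracks the input at millisecond resolution, while consciousness samples the integrated result only at sparse, discrete moments.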

    A new empirical challenge for local theories of consciousness

    Local theories of consciousness state that one is conscious of a feature if it is adequately represented and processed in sensory brain areas, given some background conditions. We challenge the core prediction of local theories based on long-lasting postdictive effects demonstrating that features can be represented for hundreds of milliseconds in perceptual areas without being consciously perceived. Unlike with previous empirical data aimed against local theories, localists cannot explain these effects away by conjecturing that subjects are phenomenally conscious of features that they cannot report. We also discuss alternative explanations that localists could offer.

    Why our best theories of perception lead to Anti-reductionism

    Basic percepts and observation sentences, such as "the voltmeter reads 7 V", provide the ground truth for realistic theories. Reduction is the second backbone of these theories, linking, for example, neuroscience to physics. First, we show, by mathematical proof, that reduction is impossible if the ontology is complex. We provide a toy example illustrating this point: a hypothetical animal has a sensor that reacts only to red and green light. When red light is presented, the animal deterministically lifts the right back limb; for green light, the left one. Inputs and outputs are causally linked by a "brain" with just a few binary neurons. Even though inputs, outputs, and brain activity are fully available to a "scientist" over millions of observations, it is impossible to decode the output from the brain activity. Hence, neither sensations nor motor actions can be reduced to the underlying neural activity, even though input and output are perfectly correlated. Next, we outline the challenges any perceptual system needs to meet. For example, the light (luminance) arriving at a photoreceptor of the retina is a combination of the light shining on an object (illuminance) and the material properties of the object (reflectance). For a given luminance value, there are infinitely many illuminance-reflectance pairs giving rise to that value (an ill-posed problem). Hence, perception cannot be based on the raw input values. Second, we show how perceptual systems can solve such ill-posed problems. One conclusion of this analysis is that perception is inherently subjective, i.e., the metric of the perceptual system is not isomorphic to the metric of physical space. We argue that perception has evolved subjective metrics precisely to cope with the abundant complexity of the physical, non-reducible external world. In conclusion, we propose that reduction is neither possible nor desirable.
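    The ill-posedness of the luminance problem can be made concrete in a few lines, using the standard first-approximation model luminance = illuminance × reflectance (a sketch; the candidate illuminance values are arbitrary illustrative numbers):

    ```python
    # For any single measured luminance there are infinitely many
    # physically consistent (illuminance, reflectance) pairs, so the
    # raw sensor value alone cannot determine the percept.

    def consistent_pairs(luminance, illuminances):
        """For each candidate illuminance, the reflectance that yields
        the same measured luminance (reflectance must lie in [0, 1])."""
        pairs = []
        for i in illuminances:
            r = luminance / i
            if 0.0 <= r <= 1.0:
                pairs.append((i, r))
        return pairs

    # One measured value, many physical explanations:
    pairs = consistent_pairs(50.0, illuminances=[50, 100, 200, 500, 1000])
    for i, r in pairs:
        assert abs(i * r - 50.0) < 1e-9  # all reproduce the same sensor value
    print(pairs)
    ```

    Every pair in the output is indistinguishable at the photoreceptor, which is exactly why the visual system must bring additional constraints to bear rather than reading the percept off the raw input.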

    Quasi-Continuous Unconscious Processing precedes Discrete Conscious Perception

    Consciousness seems to be a smooth, continuous stream of percepts. We are aware of the world at each single moment of time. However, intriguing illusions suggest that the world is not continuously translated into conscious perception. Instead, consciousness seems to operate in a discrete manner, just as movies appear continuous although they consist of discrete images. However, simple snapshot theories are at odds with the excellent temporal resolution of human vision. How can we perceive fast motion in the millisecond range when consciousness occurs only every half second? Here, we propose a novel conceptual 2-stage framework in which features of objects, such as their motion and color, are quasi-continuously and unconsciously analyzed with high temporal resolution. Temporal features, such as motion and duration, are processed like any other feature (stage 1). When unconscious processing is 'completed', all features are simultaneously rendered conscious at discrete moments in time, sometimes even hundreds of milliseconds after the stimuli were presented (stage 2). We show that this framework is supported by experiments using transcranial magnetic stimulation (TMS) and visual masking. Finally, we show conceptually why continuous consciousness is an untenable position.

    The Unfolding Argument: Why Recurrent Processing Cannot Explain Consciousness

    Neuroscientific theories aim to explain paradigm cases of consciousness such as masking, binocular rivalry, or the transition from dreamless sleep to wakefulness. The most popular theories are based on computational principles. Recurrent processing is a key feature of many of these theories, such as Integrated Information Theory (Tononi, 2004), Lamme's (2006) theory of recurrent processing, and Grossberg's Adaptive Resonance Theory (2017). Here, we point to a mathematical result proving that recurrent processing in fact cannot explain paradigm cases of consciousness. It is well established that both recurrent and feedforward networks can implement any input-output mapping. For example, if a recurrent network can explain visual masking, then there exists a feedforward network that explains masking as well. Hence, recurrent processing is not necessary to explain paradigm cases. Recurrent processing is not sufficient either since, for example, the knee-jerk reflex is unconsciously processed in a mono-synaptic recurrent loop. There is a double dissociation between paradigm cases of consciousness and recurrent processing. For example, in IIT, consciousness is quantified by a number, φ. Feedforward systems have φ=0 (they are unconscious) and recurrent systems have φ>0 (they are conscious). Although the human brain indeed has very high φ, there is a feedforward (φ=0) network that shows the same input-output characteristics in the paradigm cases. Hence, φ is not necessary to explain the paradigm cases. Conversely, although unconscious processes such as regulating blood pressure can indeed be implemented in feedforward (φ=0) systems, they can also be implemented recurrently (φ>0). Hence, φ is not sufficient for consciousness either. We propose that many computational theories are too under-constrained to explain consciousness. We suggest that the best way to proceed is to philosophically and scientifically unearth the commonalities between paradigm cases.
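    The unfolding move itself is easy to demonstrate: a recurrent network run for a fixed number of time steps computes the same input-output function as its feedforward "unrolled" copy. The sketch below uses a hand-built linear recurrence with arbitrary scalar weights, chosen purely for illustration:

    ```python
    # Unfolding a recurrent computation into a feedforward one.
    # A recurrent unit run for T steps is input-output equivalent to a
    # T-layer feedforward network whose layer t reuses the recurrent
    # weights. The scalar weights below are arbitrary illustrative values.

    W_REC, W_IN = 0.5, 1.2

    def recurrent(xs):
        """Recurrent network: a single unit updated in place over time."""
        h = 0.0
        for x in xs:
            h = W_REC * h + W_IN * x  # the same weights are reused each step
        return h

    def feedforward(xs):
        """Unrolled copy: time step t becomes layer t of a feedforward net."""
        layers = [0.0]
        for x in xs:
            layers.append(W_REC * layers[-1] + W_IN * x)  # a distinct layer per step
        return layers[-1]

    xs = [1.0, -0.5, 2.0]
    assert recurrent(xs) == feedforward(xs)  # identical input-output mapping
    ```

    Any behavioral experiment that only probes the input-output mapping therefore cannot distinguish the recurrent system from its unrolled feedforward twin, which is the crux of the argument.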